Backpropagation generalized delta rule for the selective attention Sigma-if artificial neural network
Abstract
In this paper the Sigma-if artificial neural network model is considered, which is a generalization of an MLP network with sigmoidal neurons. It was found to be a potentially universal tool for automatic creation of distributed classification and selective attention systems. To overcome the high nonlinearity of the aggregation function of Sigma-if neurons, the training process of the Sigma-if network combines an error backpropagation algorithm with the self-consistency paradigm widely used in physics. For the same reason, however, the classical backpropagation delta rule for the MLP network cannot be used. The general equation for the backpropagation generalized delta rule for the Sigma-if neural network is derived, and a selection of experimental results that confirm its usefulness is presented.
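For reference, the classical generalized delta rule for an MLP with sigmoidal neurons, i.e. the rule the paper generalizes, is sketched below. The symbols ($\mathrm{net}$, $f$, $\delta$, $\eta$, $t$, $o$, $w$) are the standard ones and are used here only for illustration; the Sigma-if-specific equation derived in the paper is not reproduced.

\begin{align*}
\mathrm{net}_j &= \sum_i w_{ji}\, o_i, \qquad o_j = f(\mathrm{net}_j), \\
\delta_j &= (t_j - o_j)\, f'(\mathrm{net}_j) && \text{(output neuron)}, \\
\delta_j &= f'(\mathrm{net}_j) \sum_k \delta_k\, w_{kj} && \text{(hidden neuron)}, \\
\Delta w_{ji} &= \eta\, \delta_j\, o_i && \text{(weight update, learning rate } \eta\text{)}.
\end{align*}

Because a Sigma-if neuron aggregates its inputs actively and highly nonlinearly rather than through the plain weighted sum $\mathrm{net}_j$ above, this form does not carry over directly, which is the problem the derived rule addresses.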
Similar resources
Generalized sigma-derivation on Banach algebras
Let $\mathcal{A}$ be a Banach algebra and $\mathcal{M}$ be a Banach $\mathcal{A}$-bimodule. We say that a linear mapping $\delta:\mathcal{A} \rightarrow \mathcal{M}$ is a generalized $\sigma$-derivation whenever there exists a $\sigma$-derivation $d:\mathcal{A} \rightarrow \mathcal{M}$ such that $\delta(ab) = \delta(a)\sigma(b) + \sigma(a)d(b)$, for all $a,b \in \mathcal{A}$. Giving some facts concerning general...
Sigma-if neural network as the use of selective attention technique in classification and knowledge discovery problems solving
The article presents the most important properties of the Sigma-if neuron and neural network, which use a selective attention technique to solve classification problems. The abilities of the Sigma-if neuron to perform active aggregation of input signals and to solve linearly inseparable problems are discussed. A variety of conducted experiments, during which the Sigma-if network was compared with multilayer perc...
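To make the phrase "active aggregation of input signals" concrete, the following minimal sketch shows one way such conditional, group-wise aggregation can be organized: inputs are split into groups that are accumulated one at a time, and aggregation stops early once the partial sum is strong enough. The grouping scheme, the stopping test, and the names (sigma_if_activation, aggregation_threshold, group_of) are illustrative assumptions for this example, not the exact formulation used in the Sigma-if papers.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigma_if_activation(weights, inputs, group_of, num_groups, aggregation_threshold):
    """Accumulate weighted inputs group by group and stop as soon as the
    partial sum is strong enough; remaining input groups are never read."""
    partial = 0.0
    for g in range(num_groups):          # groups are scanned in a fixed order
        partial += sum(w * x
                       for w, x, grp in zip(weights, inputs, group_of)
                       if grp == g)
        if abs(partial) >= aggregation_threshold:
            break                        # selective attention: skip the rest
    return sigmoid(partial)

# Example: three inputs split into two groups; the second group may be skipped.
y = sigma_if_activation(weights=[0.8, -0.4, 1.2],
                        inputs=[1.0, 0.5, 0.3],
                        group_of=[0, 0, 1],
                        num_groups=2,
                        aggregation_threshold=0.5)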
A Comparison of Concept Identification in Human Learning and Network Learning with the Generalized Delta Rule
The generalized delta rule (which is also known as error backpropagation) is a significant advance over previous procedures for network learning. In this paper, we compare network learning using the generalized delta rule to human learning on two concept identification tasks:
• Relative ease of concept identification
• Generalizing from incomplete data
Faster Learning for Dynamic Recurrent Backpropagation
The backpropagation learning algorithm for feedforward networks (Rumelhart et al. 1986) has recently been generalized to recurrent networks (Pineda 1989). The algorithm has been further generalized by Pearlmutter (1989) to recurrent networks that produce time-dependent trajectories. The latter method requires much more training time than the feedforward or static recurrent algorithms. Furthermo...
Color Space Transformation from RGB to CIELAB Using Neural Networks
Transformations in digital color imaging from RGB to CIELAB are compared between conventional ICC profiles and a newly developed neural network model. The accuracy of the transformations is computed in terms of Delta E, and a comparison is made between the ICC profile and a neural network implemented in MATLAB. The transformations are used to characterize and test the color response of an Epson...
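Delta E here refers to a CIELAB color-difference measure; the snippet does not say which variant is used, so the short sketch below assumes the simplest one, CIE76, which is just the Euclidean distance between two L*a*b* triples.

import math

def delta_e_cie76(lab1, lab2):
    """Euclidean distance between two (L*, a*, b*) color triples (CIE76)."""
    return math.sqrt(sum((c1 - c2) ** 2 for c1, c2 in zip(lab1, lab2)))

# Example: the color difference between two nearby colors.
print(delta_e_cie76((52.0, 10.0, -4.0), (53.5, 9.0, -3.0)))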
Journal: Applied Mathematics and Computer Science
Volume: 22, Issue: -
Pages: -
Publication year: 2012